UK to host AI Safety Summit in San Francisco
The UK is set to host an AI developer summit in San Francisco to establish how the commitments made in May at the AI Seoul Summit can be put into practice.
Earlier this year, the UK’s AI Safety Institute opened a San Francisco office, with the aim of bringing it closer to US AI firms such as OpenAI, Microsoft, and Anthropic.
This summit, to be held on 21 and 22 November, will focus on AI safety. A subsequent AI Action Summit in France, planned for February next year, will drill down into AI openness and governance, along with discussions over how AI could shape workplaces and culture.
At the Seoul summit, attending companies pledged to halt the development or deployment of any AI models whose potential risks could not be kept under control. This latest meeting will see them explain how they intend to ensure their technology isn't misused.
“The conference is a clear sign of the UK’s ambition to further the shared global mission to design practical and effective approaches to AI safety,” said Peter Kyle, secretary of state for science, innovation and technology.
“We’re just months away from the AI Action Summit, and the discussions in San Francisco will give companies a clear focus on where and how they can bolster their AI safety plans, building on the commitments they made in Seoul.”
The conference will follow hard on the heels of the first meeting of the International Network of AI Safety Institutes, announced yesterday, which will be held in San Francisco on 20 and 21 November.
This, according to the US Department of Commerce, will bring together AI experts from each member's AI safety institute, or equivalent government-backed scientific office. Members include Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK, and the US.
“With AI evolving at a rapid pace, we at the Department of Commerce, and across the Biden-Harris Administration, are pulling every lever. That includes close, thoughtful coordination with our allies and like-minded partners,” said US secretary of commerce Gina Raimondo.
“We want the rules of the road on AI to be underpinned by safety, security, and trust, which is why this convening is so important. I look forward to welcoming government scientists and technical experts from the International Network of AI Safety Institutes to the center of American digital innovation, as we run toward the next phase of global cooperation in advancing the science of AI safety.”
The world’s first AI Safety Summit was held at Bletchley Park in November 2023. Since then, nations around the world have been busily establishing their own AI safety testing bodies even as experts warn global AI agreement may be “impossible”.
The UK's AI Safety Institute, which is organizing the San Francisco conference with the Centre for the Governance of AI, will call for proposals on developer safety plans, the future of AI model safety evaluations, transparency, and methods for setting out different risk thresholds.